05. Attention Overview
SOLUTION:
False - a seq2seq model works by feeding one element of the input sequence at a time to the encoder

SOLUTION:
- The fixed size of the context vector passed from the encoder to the decoder is a bottleneck
- Difficulty of encoding long sequences and recalling long-term dependencies
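The bottleneck can be seen in a minimal sketch (an illustrative example, not code from the course): a plain RNN encoder consumes one input element at a time, and the entire sequence is compressed into a single fixed-size context vector, regardless of how long the input is.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_size, input_size = 4, 3  # illustrative sizes

# Randomly initialized weights for a toy RNN cell
W_h = rng.normal(size=(hidden_size, hidden_size)) * 0.1
W_x = rng.normal(size=(hidden_size, input_size)) * 0.1

def encode(sequence):
    """Feed elements one at a time; return the final hidden state."""
    h = np.zeros(hidden_size)
    for x in sequence:
        h = np.tanh(W_h @ h + W_x @ x)  # one recurrent step per element
    return h  # fixed-size context vector

short_seq = rng.normal(size=(2, input_size))   # 2 time steps
long_seq = rng.normal(size=(50, input_size))   # 50 time steps

# The context vector has the same shape no matter how long the input is;
# this fixed capacity is the bottleneck that attention relieves.
print(encode(short_seq).shape, encode(long_seq).shape)  # (4,) (4,)
```

Attention addresses this by letting the decoder look back at all encoder hidden states instead of relying on this single vector.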